Intelligent Robots and Systems
Analyzing Collision Rates in Large-Scale Mixed Traffic Control via Multi-Agent Reinforcement Learning
Vehicle collisions remain a major challenge in large-scale mixed traffic systems, especially when human-driven vehicles (HVs) and robotic vehicles (RVs) interact under dynamic and uncertain conditions. Although Multi-Agent Reinforcement Learning (MARL) offers promising capabilities for traffic signal control, ensuring safety in such environments remains difficult. As a direct indicator of traffic risk, the collision rate must be well understood and incorporated into traffic control design. This study investigates the primary factors influencing collision rates in a MARL-governed Mixed Traffic Control (MTC) network. We examine three dimensions: total vehicle count, signalized versus unsignalized intersection configurations, and turning-movement strategies. Through controlled simulation experiments, we evaluate how each factor affects collision likelihood. The results show that collision rates are sensitive to traffic density, the level of signal coordination, and turning-control design. These findings provide practical insights for improving the safety and robustness of MARL-based mixed traffic control systems, supporting the development of intelligent transportation systems in which both efficiency and safety are jointly optimized.
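The collision-rate metric central to this study can be illustrated with a minimal sketch. This is not the paper's code; the log format, field names, and the normalization (collisions per 1,000 vehicle-steps) are assumptions chosen for illustration.

```python
# Hypothetical sketch: computing a density-normalized collision rate
# from simulation episode logs. Each log entry is assumed to record the
# vehicle count, the number of simulation steps, and observed collisions.

def collision_rate(episodes):
    """Collisions per 1,000 vehicle-steps across a batch of episodes."""
    total_collisions = sum(e["collisions"] for e in episodes)
    total_vehicle_steps = sum(e["vehicles"] * e["steps"] for e in episodes)
    if total_vehicle_steps == 0:
        return 0.0
    return 1000.0 * total_collisions / total_vehicle_steps

logs = [
    {"vehicles": 200, "steps": 500, "collisions": 3},
    {"vehicles": 300, "steps": 500, "collisions": 7},
]
rate = collision_rate(logs)  # 10 collisions over 250,000 vehicle-steps
```

Normalizing by vehicle-steps rather than raw collision counts is what makes comparisons across different traffic densities meaningful.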
A Learning-based Control Methodology for Transitioning VTOL UAVs
Lin, Zexin, Zhong, Yebin, Wan, Hanwen, Cheng, Jiu, Sun, Zhenglong, Ji, Xiaoqiang
Transition control poses a critical challenge in Vertical Take-Off and Landing Unmanned Aerial Vehicle (VTOL UAV) development because the tilting-rotor mechanism shifts the center of gravity and thrust direction during transitions. Existing methods decouple altitude and position control, which causes significant vibration and limits both the treatment of coupling effects and adaptability. In this study, we propose a novel coupled transition control methodology based on a reinforcement learning (RL)-driven controller. Moreover, in contrast to the conventional phase-transition approach, the ST3M method takes a new perspective by treating cruise mode as a special case of hover. We validate the feasibility of our method in simulation and real-world environments, demonstrating efficient controller development and migration, accurate control of UAV position and attitude, outstanding trajectory tracking, and reduced vibration during the transition process.
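The idea of treating cruise as a continuous extension of hover, rather than a discrete phase switch, can be sketched as a smooth tilt schedule. The smoothstep shape, speeds, and tilt limit below are invented for illustration and are not the ST3M controller.

```python
def transition_attitude(airspeed, cruise_speed=15.0, max_tilt_deg=85.0):
    """Hypothetical continuous tilt schedule: rotor tilt grows smoothly
    from 0 deg (pure hover) to max_tilt_deg (cruise) with airspeed,
    avoiding the discontinuity of a hard hover-to-cruise phase switch."""
    frac = max(0.0, min(1.0, airspeed / cruise_speed))
    # smoothstep: zero slope at both endpoints, so the commanded tilt
    # has no kink when entering or leaving the transition regime
    s = frac * frac * (3 - 2 * frac)
    return max_tilt_deg * s
```

A continuous schedule like this is one way to avoid the vibration that abrupt phase switching induces; the paper's learned controller couples altitude and position instead of using a fixed schedule.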
LPVIMO-SAM: Tightly-coupled LiDAR/Polarization Vision/Inertial/Magnetometer/Optical Flow Odometry via Smoothing and Mapping
Shan, Derui, Guo, Peng, Li, Wenshuo, Tao, Du
We propose a tightly-coupled LiDAR/Polarization Vision/Inertial/Magnetometer/Optical Flow Odometry via Smoothing and Mapping (LPVIMO-SAM) framework, which integrates LiDAR, polarization vision, an inertial measurement unit, a magnetometer, and optical flow in a tightly-coupled fusion. This framework enables high-precision, highly robust real-time state estimation and map construction in challenging environments, such as LiDAR-degraded, low-texture, and feature-scarce areas. LPVIMO-SAM comprises two subsystems: a Polarized Vision-Inertial System and a LiDAR/Inertial/Magnetometer/Optical Flow System. The polarized vision enhances the robustness of the visual/inertial odometry in low-feature and low-texture scenarios by extracting the polarization information of the scene. The magnetometer acquires the heading angle, and the optical flow obtains the speed and height to reduce the accumulated error. A magnetometer heading prior factor, an optical flow speed observation factor, and a height observation factor are designed to eliminate the cumulative errors of the LiDAR/inertial odometry through factor graph optimization. Meanwhile, LPVIMO-SAM can maintain stable positioning even when one of the two subsystems fails, further expanding its applicability in LiDAR-degraded, low-texture, and low-feature environments. Code is available at https://github.com/junxiaofanchen/LPVIMO-SAM.
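The magnetometer heading prior factor described above can be illustrated with a toy information-weighted fusion of an odometry heading and a magnetometer heading. This is a one-dimensional sketch, not the paper's factor-graph implementation; function names and the variance-based weighting are assumptions.

```python
import math

def fuse_heading(odom_heading, odom_var, mag_heading, mag_var):
    """Information-weighted fusion of a drifting odometry heading with a
    magnetometer heading prior, with angle wrap-around handled by working
    in the residual between the two angles."""
    # express the magnetometer heading relative to odometry to avoid
    # discontinuities near +/- pi
    delta = math.atan2(math.sin(mag_heading - odom_heading),
                       math.cos(mag_heading - odom_heading))
    w = odom_var / (odom_var + mag_var)  # weight given to the magnetometer
    fused = odom_heading + w * delta
    return math.atan2(math.sin(fused), math.cos(fused))  # wrap to (-pi, pi]
```

In the actual system this correction is one factor among several in a joint graph optimization; the sketch shows only why an absolute heading measurement bounds the accumulated yaw drift.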
Reinforcement Learning for Robotic Safe Control with Force Sensing
Lin, Nan, Zhang, Linrui, Chen, Yuxuan, Chen, Zhenrui, Zhu, Yujun, Chen, Ruoxi, Wu, Peichen, Chen, Xiaoping
For tasks involving complicated manipulation in unstructured environments, traditional hand-coded methods are ineffective, while reinforcement learning can provide more general and useful policies. Although reinforcement learning can achieve impressive results, its stability and reliability are hard to guarantee, which poses potential safety threats. Moreover, the transfer from simulation to the real world can lead to unpredictable situations. To enhance the safety and reliability of robots, we introduce force and haptic perception into reinforcement learning. We demonstrate that the force-based reinforcement learning method adapts better to the environment, especially in sim-to-real transfer. Experimental results on an object-pushing task show that our strategy is safer and more efficient in both simulation and the real world, and thus holds promise for a wide variety of robotic applications.
Coordinating Spinal and Limb Dynamics for Enhanced Sprawling Robot Mobility
Atasever, Merve, Okhovat, Ali, Nazaripouya, Azhang, Nisbet, John, Kurkutlu, Omer, Deshmukh, Jyotirmoy V., Aydin, Yasemin Ozkan
Sprawling locomotion in vertebrates, particularly salamanders, demonstrates how body undulation and spinal mobility enhance stability, maneuverability, and adaptability across complex terrains. While prior work has separately explored biologically inspired gait design or deep reinforcement learning (DRL), these approaches face inherent limitations: open-loop gait designs often lack adaptability to unforeseen terrain variations, whereas end-to-end DRL methods are data-hungry and prone to unstable behaviors when transferring from simulation to real robots. We propose a hybrid control framework that integrates Hildebrand's biologically grounded gait design with DRL, enabling a salamander-inspired quadruped robot to exploit active spinal joints for robust crawling motion. Our evaluation across multiple robot configurations in target-directed navigation tasks reveals that this hybrid approach systematically improves robustness under environmental uncertainties such as surface irregularities. By bridging structured gait design with learning-based methodology, our work highlights the promise of interdisciplinary control strategies for developing efficient, resilient, and biologically informed spinal actuation in robotic systems.
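A Hildebrand-style open-loop gait, the structured half of the hybrid framework above, can be sketched as a phase generator that assigns each leg to stance or swing from a duty factor and fixed phase offsets. The specific offsets and duty factor below are illustrative assumptions, not the paper's tuned parameters.

```python
def hildebrand_gait(t, duty=0.75, freq=1.0,
                    phase_offsets=(0.0, 0.5, 0.25, 0.75)):
    """Open-loop lateral-sequence walk in the Hildebrand style: each of
    four legs is in stance while its normalized cycle phase is below the
    duty factor, and in swing otherwise."""
    states = []
    for off in phase_offsets:
        phase = (t * freq + off) % 1.0
        states.append("stance" if phase < duty else "swing")
    return states
```

In the hybrid framework, a learned policy modulates a structured generator like this (plus the spinal joints) instead of producing raw joint torques, which is what keeps the learned behavior stable.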
Visibility-aware Cooperative Aerial Tracking with Decentralized LiDAR-based Swarms
Yin, Longji, Ren, Yunfan, Zhu, Fangcheng, Shi, Liuyu, Kong, Fanze, Tang, Benxu, Liu, Wenyi, Lyu, Ximin, Zhang, Fu
Abstract-- Autonomous aerial tracking with drones offers vast potential for surveillance, cinematography, and industrial inspection applications. While single-drone tracking systems have been extensively studied, swarm-based target tracking remains underexplored, despite its unique advantages of distributed perception, fault-tolerant redundancy, and multidirectional target coverage. To bridge this gap, we propose a novel decentralized LiDAR-based swarm tracking framework that enables visibility-aware, cooperative target tracking in complex environments, while fully harnessing the unique capabilities of swarm systems. To address visibility, we introduce a novel Spherical Signed Distance Field (SSDF)-based metric for 3-D environmental occlusion representation, coupled with an efficient algorithm that enables real-time onboard SSDF updating. A general Field-of-View (FOV) alignment cost supporting heterogeneous LiDAR configurations is proposed for consistent target observation. These innovations are integrated into a hierarchical planner, combining a kinodynamic front-end searcher with a spatiotemporal SE(3) back-end optimizer to generate collision-free, visibility-optimized trajectories. The proposed approach undergoes thorough evaluation through comprehensive benchmark comparisons and ablation studies. Deployed on heterogeneous LiDAR swarms, our fully decentralized implementation features collaborative perception, distributed planning, and dynamic swarm reconfigurability. Validated through rigorous real-world experiments in cluttered outdoor environments, the proposed system demonstrates robust cooperative tracking of agile targets (drones, humans) while achieving superior visibility maintenance. This work establishes a systematic solution for swarm-based target tracking, and its source code will be released to benefit the community.
Recent studies highlight the unique suitability of UAVs for tracking dynamic targets in complex environments, owing to their highly agile three-dimensional (3-D) maneuverability. While substantial progress has been made in single-UAV tracking, swarm-based aerial tracking remains underexplored. The authors are with the Department of Mechanical Engineering, The University of Hong Kong, Hong Kong. X. Lyu is with the School of Intelligent System Engineering, Sun Yat-sen University, Shenzhen, China. Figure caption: a swarm of four autonomous drones cooperatively tracks a human runner using heterogeneous LiDAR configurations: one upward-facing Mid360 LiDAR (blue dashed lines), one downward-facing Mid360 LiDAR (green dashed lines), and two Avia LiDARs (red dashed lines); the swarm forms a 3-D distribution, with each tracker positioned to suit its FOV settings. Effective agile aerial tracking with autonomous swarms relies primarily on three criteria: visibility, coordination, and portability.
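The FOV alignment idea above can be illustrated with a toy cost that grows as the target drifts away from a sensor's boresight. This is a simplified stand-in, not the paper's cost; the quadratic penalty and function signature are assumptions.

```python
import math

def fov_alignment_cost(drone_pos, boresight, target_pos, half_fov_rad):
    """Penalty on the angle between a sensor's boresight (unit vector)
    and the direction to the target: zero while the target is inside the
    FOV margin, growing quadratically once it drifts outside."""
    dx = [t - p for t, p in zip(target_pos, drone_pos)]
    norm = math.sqrt(sum(c * c for c in dx)) or 1.0
    dot = sum(b * c for b, c in zip(boresight, dx)) / norm
    angle = math.acos(max(-1.0, min(1.0, dot)))
    return max(0.0, angle - half_fov_rad) ** 2
```

Because each drone evaluates the cost with its own boresight and FOV width, the same term accommodates the upward-, downward-, and forward-facing LiDAR configurations in a heterogeneous swarm.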
Quality-guided UAV Surface Exploration for 3D Reconstruction
Sportich, Benjamin, Boubakri, Kenza, Simonin, Olivier, Renzaglia, Alessandro
Abstract-- Reasons for mapping an unknown environment with autonomous robots are wide-ranging, but in practice they are often overlooked when developing planning strategies. Rapid information gathering and comprehensive structural assessment of buildings have different requirements and therefore necessitate distinct methodologies. In this paper, we propose a novel modular Next-Best-View (NBV) planning framework for aerial robots that explicitly uses a reconstruction quality objective to guide exploration planning. In particular, our approach introduces new and efficient methods for generating and selecting viewpoint candidates that adapt to user-defined quality requirements, fully exploiting the uncertainty encoded in a Truncated Signed Distance Field (TSDF) representation of the environment. This results in informed and efficient exploration decisions tailored to the predetermined objective. We demonstrate that our framework successfully adjusts its behavior to the user goal while consistently outperforming conventional NBV strategies in terms of coverage, quality of the final 3D map, and path efficiency. Autonomous exploration for 3D reconstruction is a fundamental task in robotics, with critical real-world applications such as infrastructure inspection, mapping, environmental monitoring, and search-and-rescue missions [1]. In these scenarios, Unmanned Aerial Vehicles (UAVs) equipped with onboard visual and range sensors have proven essential for efficiently navigating complex environments and capturing aerial perspectives that enable comprehensive 3D reconstructions or a swift survey of an area.
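A minimal sketch of quality-guided NBV selection: score each candidate viewpoint by the map uncertainty it could observe and pick the best. The 2-D grid, radius-based visibility stand-in (real systems ray-cast through the TSDF), and names are illustrative assumptions, not the paper's algorithm.

```python
def view_gain(candidate, uncertainty, radius):
    """Score a candidate viewpoint by the summed voxel uncertainty within
    its sensing radius (a crude stand-in for ray-cast visibility)."""
    cx, cy = candidate
    gain = 0.0
    for (x, y), u in uncertainty.items():
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
            gain += u
    return gain

def next_best_view(candidates, uncertainty, radius=2):
    """Greedy NBV: the candidate whose view covers the most uncertainty."""
    return max(candidates, key=lambda c: view_gain(c, uncertainty, radius))
```

The paper's contribution is precisely in making the gain term quality-aware, so that the same loop trades coverage speed against reconstruction fidelity depending on the user objective.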
Splatblox: Traversability-Aware Gaussian Splatting for Outdoor Robot Navigation
Chopra, Samarth, Liang, Jing, Seneviratne, Gershom, Lee, Yonghan, Choi, Jaehoon, An, Jianyu, Cheng, Stephen, Manocha, Dinesh
We present Splatblox, a real-time system for autonomous navigation in outdoor environments with dense vegetation, irregular obstacles, and complex terrain. Our method fuses segmented RGB images and LiDAR point clouds using Gaussian Splatting to construct a traversability-aware Euclidean Signed Distance Field (ESDF) that jointly encodes geometry and semantics. Updated online, this field enables semantic reasoning to distinguish traversable vegetation (e.g., tall grass) from rigid obstacles (e.g., trees), while LiDAR ensures 360-degree geometric coverage for extended planning horizons. We validate Splatblox on a quadruped robot and demonstrate transfer to a wheeled platform. In field trials across vegetation-rich scenarios, it outperforms state-of-the-art methods with over 50% higher success rate, 40% fewer freezing incidents, 5% shorter paths, and up to 13% faster time to goal, while supporting long-range missions up to 100 meters. Experiment videos and more details can be found on our project page: https://splatblox.github.io
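The ESDF at the core of the pipeline above maps every cell to its distance from the nearest obstacle. A brute-force sketch on a tiny unsigned grid (real systems like Splatblox update the field incrementally and online, and encode semantics as well):

```python
import math

def esdf_grid(occupied, width, height):
    """Brute-force Euclidean distance-to-nearest-obstacle for every cell
    of a small grid (0.0 for occupied cells). Requires a non-empty
    obstacle set; real ESDF builders use incremental wavefront updates
    instead of this exhaustive sweep."""
    field = {}
    for x in range(width):
        for y in range(height):
            if (x, y) in occupied:
                field[(x, y)] = 0.0
            else:
                field[(x, y)] = min(
                    math.hypot(x - ox, y - oy) for (ox, oy) in occupied
                )
    return field
```

A planner can then treat the field value as clearance, which is why marking traversable vegetation as free space (rather than as obstacles) directly widens the set of usable paths.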
Stable Multi-Drone GNSS Tracking System for Marine Robots
Wen, Shuo, Meriaux, Edwin, Guzmán, Mariana Sosa, Wang, Zhizun, Shi, Junming, Dudek, Gregory
Abstract-- Accurate localization is essential for marine robotics, yet Global Navigation Satellite System (GNSS) signals are unreliable or unavailable even a very short distance below the water surface. Traditional alternatives, such as inertial navigation, Doppler Velocity Loggers (DVL), SLAM, and acoustic methods, suffer from error accumulation, high computational demands, or infrastructure dependence. In this work, we present a scalable multi-drone GNSS-based tracking system for surface and near-surface marine robots. Our approach combines efficient visual detection, lightweight multi-object tracking, GNSS-based triangulation, and a confidence-weighted Extended Kalman Filter (EKF) to provide stable GNSS estimation in real time. We further introduce a cross-drone tracking ID alignment algorithm that enforces global consistency across views, enabling robust multi-robot tracking with redundant aerial coverage. We validate our system in diverse and complex settings to show the scalability and robustness of the proposed algorithm. While satellite-based positioning is widely accepted for surface marine robots, its effectiveness diminishes once the robots descend even a very short distance below the ocean surface or if the antenna is wetted with salt water.
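The triangulation step above can be illustrated in 2-D: two drones at known GNSS positions each observe a bearing to the target, and the target fix is the least-squares intersection of the two rays. This sketch is an illustrative simplification (planar, two views, no confidence weighting), not the paper's multi-view EKF pipeline.

```python
def triangulate(p1, b1, p2, b2):
    """Intersect two 2-D bearing rays, given drone positions p1, p2 and
    unit bearing vectors b1, b2, by solving p1 + t1*b1 = p2 + t2*b2."""
    # 2x2 linear system in (t1, t2)
    a11, a12 = b1[0], -b2[0]
    a21, a22 = b1[1], -b2[1]
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        return None  # near-parallel bearings: no reliable fix
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * a22 - ry * a12) / det
    return (p1[0] + t1 * b1[0], p1[1] + t1 * b1[1])
```

The degenerate near-parallel case is exactly why redundant aerial coverage from multiple drones matters: widely separated viewpoints keep the system well conditioned.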